Self-Supervised Graph Representation Learning via Information Bottleneck
Authors
Abstract
Graph representation learning has become a mainstream method for processing network-structured data, yet most graph representation methods rely heavily on labeling information for downstream tasks. Since labeled data are rare in the real world, adopting self-supervised learning for graph neural networks is a significant challenge. Currently, existing approaches attempt to maximize mutual information for self-supervised learning, which leads to a large amount of redundant information and thus affects downstream task performance. Therefore, the self-supervised graph information bottleneck (SGIB) proposed in this paper uses the symmetry and asymmetry of graphs to establish contrastive learning and introduces information bottleneck theory as the loss function for training the model. The model extracts the common features of both views and the features independent of each view by maximizing the mutual information estimation between the local high-level representation of one view and the global summary vector of the other view. It also removes information that is not relevant to the target task by minimizing the mutual information between the local representations of the two views. Based on extensive experimental results on three public datasets and two large-scale datasets, it is shown that SGIB learns higher-quality node representations: several classical network analysis tasks, such as node classification and node clustering, are improved compared with existing models in an unsupervised environment. In addition, an in-depth experiment designed for further analysis shows that SGIB can alleviate the over-smoothing problem to a certain extent. We therefore infer from the different experiments that introducing information bottleneck theory to remove redundant information would be an effective improvement for downstream tasks.
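The objective described above combines a DGI-style mutual-information term (local node embeddings of one view scored against the global summary vector of the other view, with corrupted embeddings as negatives) and an information-bottleneck penalty on redundancy between the two views. The following is a minimal NumPy sketch of such a loss, not the authors' implementation; the bilinear critic, the binary cross-entropy MI estimator, and the cosine-similarity proxy for the cross-view penalty (weighted by a hypothetical `beta`) are all assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def critic(local_h, summary, W):
    """Bilinear critic: score of each node embedding against a global summary vector."""
    return sigmoid(local_h @ W @ summary)

def sgib_like_loss(h1, h2, s1, s2, h1_neg, h2_neg, W, beta=0.1):
    """Sketch of an SGIB-style objective over two graph views.

    h1, h2       : (n, d) node embeddings of view 1 / view 2
    h1_neg, h2_neg: corrupted (e.g. row-shuffled) embeddings used as negatives
    s1, s2       : (d,) global summary vectors of each view
    W            : (d, d) bilinear critic weights
    beta         : weight of the bottleneck (redundancy) penalty -- assumed name
    """
    eps = 1e-9
    # Maximize cross-view MI: real embeddings vs. the *other* view's summary ...
    pos = np.concatenate([critic(h1, s2, W), critic(h2, s1, W)])
    # ... while pushing down scores of corrupted embeddings (negatives).
    neg = np.concatenate([critic(h1_neg, s2, W), critic(h2_neg, s1, W)])
    # Binary cross-entropy estimator of mutual information (DGI-style).
    mi_term = -(np.log(pos + eps).mean() + np.log(1.0 - neg + eps).mean())
    # Bottleneck term: penalize similarity between the two views' local
    # representations (a simple cosine proxy for their mutual information).
    cos = np.sum(h1 * h2, axis=1) / (
        np.linalg.norm(h1, axis=1) * np.linalg.norm(h2, axis=1) + eps
    )
    return mi_term + beta * cos.mean()
```

In training, the encoder producing `h1`/`h2` would be updated to minimize this loss, so the two views agree with each other's global summary while keeping their local representations non-redundant.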
Similar resources
Semi-supervised Data Representation via Affinity Graph Learning
We consider the general problem of utilizing both labeled and unlabeled data to improve data representation performance. A new semi-supervised learning framework is proposed by combining manifold regularization and data representation methods such as non-negative matrix factorization and sparse coding. We adopt unsupervised data representation methods as the learning machines because they do not ...
Reblur2Deblur: Deblurring Videos via Self-Supervised Learning
Motion blur is a fundamental problem in computer vision as it impacts image quality and hinders inference. Traditional deblurring algorithms leverage the physics of the image formation model and use hand-crafted priors: they usually produce results that better reflect the underlying scene, but present artifacts. Recent learning-based methods implicitly extract the distribution of natural images...
Deblocking Joint Photographic Experts Group Compressed Images via Self-learning Sparse Representation
JPEG is one of the most widely used image compression methods, but it causes annoying blocking artifacts at low bit-rates. Sparse representation is an efficient technique which can solve many inverse problems in image processing applications such as denoising and deblocking. In this paper, a post-processing method is proposed for reducing JPEG blocking effects via sparse representation. In this ...
Supervised Hashing for Image Retrieval via Image Representation Learning
Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity/dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector...
Semi-supervised Clustering for Short Text via Deep Representation Learning
In this work, we propose a semi-supervised method for short text clustering, where we represent texts as distributed vectors with neural networks, and use a small amount of labeled data to specify our intention for clustering. We design a novel objective to combine the representation learning process and the k-means clustering process together, and optimize the objective with both labeled data a...
Journal
Journal title: Symmetry
سال: 2022
ISSN: 0865-4824, 2226-1877
DOI: https://doi.org/10.3390/sym14040657